
    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor lighting conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an Autonomous Underwater Vehicle (AUV) with object detection, localization, and tracking capabilities. In this paper, we describe the integration of a vision system into the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and to return a reliable pose estimate even in cases of partial pipe visibility. Experiments in an outdoor water pool under different lighting conditions show that the adopted algorithmic approach detects target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations.
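
    The abstract gives no implementation details of the tracker; the following is a minimal Python sketch of a generic alpha-beta filter applied to a scalar edge parameter (for example, an edge's offset in the image). The gains, frame rate, and state layout are illustrative assumptions, not the authors' actual design.

        # Minimal alpha-beta filter sketch (gains and state layout are assumptions).
        class AlphaBetaFilter:
            def __init__(self, x0, v0=0.0, alpha=0.85, beta=0.005, dt=1.0 / 30.0):
                self.x = x0          # tracked quantity, e.g. edge offset in pixels
                self.v = v0          # its rate of change
                self.alpha = alpha   # position correction gain
                self.beta = beta     # velocity correction gain
                self.dt = dt         # frame period

            def update(self, z=None):
                # Predict assuming constant velocity over one frame.
                x_pred = self.x + self.v * self.dt
                if z is None:        # detection failed (e.g. pipe partially occluded)
                    self.x = x_pred  # coast on the prediction
                    return self.x
                r = z - x_pred       # innovation: measurement minus prediction
                self.x = x_pred + self.alpha * r
                self.v = self.v + (self.beta / self.dt) * r
                return self.x

        # Feed per-frame edge detections, passing None when detection fails.
        f = AlphaBetaFilter(x0=120.0)
        for z in (121.0, 123.5, None, 126.0):
            print(f.update(z))

    Coasting on the prediction when a measurement is missing is what lets a tracker of this kind bridge frames in which the pipe is only partially visible.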

    Nuclear Environments Inspection with Micro Aerial Vehicles: Algorithms and Experiments

    In this work, we address the estimation, planning, control, and mapping problems required to allow a small quadrotor to autonomously inspect the interior of hazardous, damaged nuclear sites. These algorithms run entirely onboard, on a computationally limited CPU. We investigate the effect of varying illumination on system performance. To the best of our knowledge, this is the first fully autonomous system of this size and scale applied to inspecting the interior of a full-scale mock-up of a Primary Containment Vessel (PCV). The proposed solution opens up new ways to inspect nuclear reactors and to support nuclear decommissioning, which is well known to be a dangerous, long, and tedious process. Experimental results under varying illumination conditions show the ability to navigate a full-scale mock-up PCV pedestal and build a map of the environment while concurrently avoiding obstacles.
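
    The abstract does not specify the map representation; a common choice for onboard mapping with obstacle-avoidance queries is a log-odds occupancy grid, sketched below in Python. The grid resolution, sensor model, and update gains are assumptions for illustration, not details from the paper.

        import numpy as np

        # Log-odds occupancy-grid sketch (all parameters are assumptions).
        RES = 0.1                    # meters per cell
        grid = np.zeros((200, 200))  # log-odds map covering 20 m x 20 m

        def to_cell(p):
            # World (x, y) in meters -> grid indices, origin at the map center.
            return int(p[0] / RES) + 100, int(p[1] / RES) + 100

        def integrate_return(robot_xy, hit_xy, l_occ=0.85, l_free=-0.4):
            # Carve free space along the ray, then mark the endpoint occupied.
            for t in np.linspace(0.0, 1.0, 20, endpoint=False):
                i, j = to_cell(robot_xy + t * (hit_xy - robot_xy))
                grid[i, j] += l_free
            i, j = to_cell(hit_xy)
            grid[i, j] += l_occ

        integrate_return(np.array([0.0, 0.0]), np.array([2.0, 1.0]))
        occupied = grid > 0.5  # thresholded cells feed obstacle avoidance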

    Autonomous Underwater Intervention: Experimental Results of the MARIS Project

    Simetti, E.; Wanderlingh, F.; Torelli, S.; Bibuli, M.; Odetti, A.; Bruzzone, G.; Lodi Rizzini, D.; Aleotti, J.; Palli, G.; Moriello, L.; Scarcia, U.

    Toward Future Automatic Warehouses: An Autonomous Depalletizing System Based on Mobile Manipulation and 3D Perception

    This paper presents a mobile manipulation platform designed for autonomous depalletizing tasks. The proposed solution integrates machine vision, control, and mechanical components to increase flexibility and ease of deployment in industrial environments such as warehouses. A collaborative robot mounted on a mobile base is proposed, equipped with a simple manipulation tool and a 3D in-hand vision system that detects parcel boxes on a pallet and pulls them one by one onto the mobile base for transportation. This setup avoids the cumbersome implementation of pick-and-place operations, since it does not require lifting the boxes. The 3D vision system provides an initial estimate of the pose of the boxes on the top layer of the pallet and accurately detects the separation between boxes for manipulation. Force measurements provided by the robot, together with admittance control, are exploited to verify the correct execution of the manipulation task. The proposed system was implemented and tested in a simplified laboratory scenario, and the results of experimental trials are reported.
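
    The abstract does not give the control law; the sketch below shows a minimal one-degree-of-freedom admittance controller in Python, in which the measured contact force drives virtual mass-damper dynamics to produce a compliant velocity. The gains, force setpoint, and stubbed sensor are assumptions, not the paper's implementation.

        # 1-DoF admittance control sketch (gains, setpoint, and stub are assumptions).
        M = 2.0       # virtual mass [kg]
        D = 25.0      # virtual damping [N*s/m]
        DT = 0.002    # control period [s]
        F_REF = 15.0  # desired pulling force [N]

        def read_force():
            # Stub for the robot's wrist force sensor; returns a constant here.
            return 18.0

        def admittance_step(f_meas, v):
            # One Euler step of the virtual dynamics M*dv/dt + D*v = f_meas - F_REF.
            dv = (f_meas - F_REF - D * v) / M
            return v + dv * DT

        v = 0.0
        for _ in range(1000):
            v = admittance_step(read_force(), v)
        # v converges to (18 - 15) / D = 0.12 m/s: the box yields and the pull
        # proceeds; a force error that never settles would signal a failed pull.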

    Underwater intervention robotics: An outline of the Italian national project Maris

    The Italian national project MARIS (Marine Robotics for Interventions) pursues the strategic objective of studying, developing, and integrating technologies and methodologies that enable autonomous underwater robotic systems to be employed for intervention activities. Such activities are becoming progressively more common in the underwater offshore industry, in search-and-rescue operations, and in underwater scientific missions. Within this ambitious objective, the project consortium also intends to demonstrate the achievable operational capabilities at a proof-of-concept level by integrating the results into prototype experimental systems.

    Grasp Programming by Demonstration in Virtual Reality with Automatic Environment Reconstruction

    A virtual reality system enabling high-level programming of robot grasps is described. The system is designed to support programming by demonstration (PbD), an approach aimed at simplifying robot programming and empowering even inexperienced users with the ability to easily transfer knowledge to a robotic system. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, in which an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and to the object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, since in the PbD approach robot-environment interactions are no longer explicitly programmed, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the poses of the objects in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and computation of edge-face graphs is proposed. The modeling algorithm works in real time and supports registration of multiple views. Object recognition and workspace reconstruction features, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and programming of assembly operations. Experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.
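
    The abstract does not detail how demonstrated grasps are classified; one simple baseline is nearest-neighbor matching of recorded hand joint angles against grasp prototypes, sketched below in Python. The feature layout, grasp labels, and numeric values are illustrative assumptions, not the paper's recognizer.

        import numpy as np

        # Nearest-neighbor grasp classification sketch (labels and angles are
        # assumptions; each prototype is a vector of finger joint angles in rad).
        PROTOTYPES = {
            "power":     np.array([1.2, 1.3, 1.1, 1.2, 0.9]),
            "precision": np.array([0.4, 0.5, 0.3, 0.2, 0.6]),
            "lateral":   np.array([0.8, 0.2, 0.1, 0.1, 1.0]),
        }

        def classify_grasp(joint_angles):
            # Return the label of the closest prototype in joint-angle space.
            return min(PROTOTYPES,
                       key=lambda k: np.linalg.norm(PROTOTYPES[k] - joint_angles))

        demo = np.array([1.1, 1.2, 1.0, 1.1, 0.8])  # angles from a data glove
        print(classify_grasp(demo))                 # -> "power"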

    Visualization of AGV in Virtual Reality and Collision Detection with Large Scale Point Clouds

    Virtual reality (VR) will play an important role in the factory of the future. In this paper, an immersive and interactive VR system is presented for 3D visualization of automated guided vehicles (AGVs) moving in a warehouse. The environment model consists of a large-scale point cloud obtained through a Terrestrial Laser Scanning (TLS) survey. Realistic AGV animation is achieved thanks to the extraction of an accurate model of the ground. Visualization of AGV safety zones is also supported. Moreover, the system enables real-time collision detection between the 3D vehicle model and the point cloud model of the environment. Collision detection is useful for checking the feasibility of a specified vehicle path. Efficient techniques for dynamic loading of massive point cloud data have been developed to speed up rendering and collision detection. The VR system can be used to assist the design of automated warehouses and to show customers what their future industrial plant would look like.
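
    The abstract does not name the spatial data structure used; a straightforward way to test a vehicle pose against a point cloud is to query a k-d tree of the points with spheres bounding the vehicle body, as in the Python sketch below. The bounding-sphere decomposition, radii, and random stand-in cloud are assumptions for illustration.

        import numpy as np
        from scipy.spatial import cKDTree

        # Point-cloud collision check sketch (vehicle model and cloud are stand-ins).
        points = np.random.rand(100_000, 3) * 20.0  # placeholder for a TLS cloud
        tree = cKDTree(points)

        def agv_spheres(x, y, radius=0.6):
            # Approximate the AGV body with three spheres along its long axis.
            return [np.array([x + dx, y, 0.5]) for dx in (-0.8, 0.0, 0.8)], radius

        def collides(x, y):
            # True if any cloud point lies inside a vehicle bounding sphere.
            centers, r = agv_spheres(x, y)
            return any(tree.query_ball_point(c, r) for c in centers)

        # Check a candidate path pose by pose before committing it to the AGV.
        path = [(1.0, 1.0), (2.0, 1.5), (3.0, 2.0)]
        print([collides(x, y) for (x, y) in path])

    For clouds too large to keep in memory, the same query can be run per chunk, loading only the cells of a spatial partition that the vehicle's bounding volume overlaps, which is one way to realize the dynamic loading the abstract mentions.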